In this paper, we consider the problem of click-through rate (CTR) prediction. Factorization Machines (FMs) and their variants model pairwise feature interactions, but FMs are rarely used for higher-order feature interactions because of their high time complexity. Given the success of deep neural networks (DNNs) in many fields, researchers have proposed several DNN-based models to learn higher-order feature interactions, in which multi-layer perceptrons are widely used to learn reliable mappings from feature embeddings to the final logits. In this paper, we aim to explore these higher-order feature interactions further, as they deserve more attention and development. Inspired by the great achievements of Densely Connected Convolutional Networks (DenseNet) in computer vision, we propose a novel model called Attentive DenseNet-based Factorization Machine (AdnFM). AdnFM extracts more comprehensive deep features by using all the hidden layers of a feed-forward neural network as implicit high-order features, and then selects the dominant ones through an attention mechanism. Moreover, modeling high-order interactions implicitly with a DNN is more cost-efficient than doing so explicitly, as in FM. Extensive experiments on two real-world datasets show that the proposed model can effectively improve the performance of CTR prediction.
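A minimal PyTorch sketch of the mechanism described above may help: it pools all hidden layers of a feed-forward network through an attention layer to form implicit high-order features. This is not the authors' implementation; the DenseNet-style connections are omitted, and the layer widths, ReLU activations, and sigmoid head are illustrative assumptions.

import torch
import torch.nn as nn

class AttentiveDeepComponent(nn.Module):
    # Sketch: attention over every hidden layer of a plain feed-forward network.
    def __init__(self, input_dim, hidden_dim=64, num_layers=3):
        super().__init__()
        dims = [input_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1]) for i in range(num_layers)]
        )
        self.attn = nn.Linear(hidden_dim, 1)   # scores each hidden layer
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        hidden_states, h = [], x
        for layer in self.layers:
            h = torch.relu(layer(h))
            hidden_states.append(h)                      # keep every hidden layer
        H = torch.stack(hidden_states, dim=1)            # (batch, num_layers, hidden_dim)
        weights = torch.softmax(self.attn(H), dim=1)     # attention over layers
        pooled = (weights * H).sum(dim=1)                # weighted sum of implicit features
        return torch.sigmoid(self.out(pooled)).squeeze(-1)

# Toy usage on dummy concatenated feature embeddings (batch of 8, 32-dim input).
model = AttentiveDeepComponent(input_dim=32)
ctr = model(torch.randn(8, 32))                          # predicted click probabilities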
Personalized federated learning (FL) facilitates collaboration among multiple clients to learn personalized models without sharing private data. This mechanism mitigates the statistical heterogeneity commonly encountered in such systems, i.e., non-IID data across different clients. Existing personalization algorithms generally assume that all clients volunteer for personalization. However, potential participants may still be reluctant to personalize their models, since personalized models may not work well; in that case, clients choose to use the global model instead. To avoid making unrealistic assumptions, we introduce the personalization rate, defined as the fraction of clients willing to train personalized models, into the federated setting and propose DyPFL. This dynamically personalized FL technique incentivizes clients to participate in personalizing their local models, while allowing them to adopt the global model when it performs better. We show that the algorithmic pipeline of DyPFL guarantees good convergence, allowing it to outperform alternative personalization methods under a wide range of conditions, including variations in heterogeneity, the number of clients, local epochs, and batch sizes.
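To make the role of the personalization rate concrete, here is a minimal PyTorch sketch under stated assumptions rather than the DyPFL algorithm itself: a capped fraction of clients fine-tune a personal copy of the global model and keep it only if it validates better than the global model. The MSE loss, SGD optimizer, data layout, and the choice of which clients are willing are illustrative.

import copy
import torch
import torch.nn as nn

def dynamic_personalization(global_model, clients, personalization_rate=0.5,
                            local_epochs=1, lr=0.01):
    loss_fn = nn.MSELoss()
    chosen = []
    # For simplicity, the first `willing` clients are the ones who personalize.
    willing = int(len(clients) * personalization_rate)
    for i, (x_train, y_train, x_val, y_val) in enumerate(clients):
        if i >= willing:                       # unwilling clients keep the global model
            chosen.append(global_model)
            continue
        local = copy.deepcopy(global_model)
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        for _ in range(local_epochs):          # local fine-tuning on the client's data
            opt.zero_grad()
            loss_fn(local(x_train), y_train).backward()
            opt.step()
        with torch.no_grad():                  # adopt whichever model validates better
            local_loss = loss_fn(local(x_val), y_val)
            global_loss = loss_fn(global_model(x_val), y_val)
        chosen.append(local if local_loss < global_loss else global_model)
    return chosen

# Toy usage: 4 clients with random regression data and a shared linear model.
clients = [(torch.randn(16, 5), torch.randn(16, 1),
            torch.randn(8, 5), torch.randn(8, 1)) for _ in range(4)]
models = dynamic_personalization(nn.Linear(5, 1), clients)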
Federated learning is a distributed machine learning mechanism in which local devices collaboratively train a shared global model under the orchestration of a central server while keeping all private data decentralized. Because model parameters and their updates, rather than raw data, are transmitted in such systems, the communication bottleneck has become a key challenge. Moreover, recent larger and deeper machine learning models make it even harder to deploy them in federated environments. In this paper, we design a federated two-stage learning framework that augments prototypical federated learning with a cut layer on devices and adopts sign-based stochastic gradient descent with majority voting for model updates. The cut layer on devices learns informative, low-dimensional representations of the local raw data, which helps reduce the number of global model parameters and prevents data leakage. Sign-based SGD with majority voting for model updates also helps alleviate the communication constraints. Empirically, we show that our system is an efficient and privacy-preserving federated learning scheme suitable for general application scenarios.
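The sign-based update with majority voting mentioned above can be sketched in a few lines of PyTorch; the learning rate and tensor shapes are placeholders, and a real system would apply this per parameter tensor of the global model.

import torch

def client_sign_gradient(grad):
    # Each client compresses its local gradient to one sign bit per coordinate.
    return torch.sign(grad)

def server_majority_vote_update(param, client_signs, lr=0.01):
    # The server takes the element-wise majority sign of all client messages
    # and moves the parameter by the learning rate in that voted direction.
    vote = torch.sign(torch.stack(client_signs).sum(dim=0))
    return param - lr * vote

# Toy usage: three clients voting on the update of one 4-dimensional parameter.
param = torch.zeros(4)
signs = [client_sign_gradient(torch.randn(4)) for _ in range(3)]
param = server_majority_vote_update(param, signs)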
Dialogue state tracking (DST) aims to convert the dialogue history into dialogue states, which consist of slot-value pairs. As condensed structural information that memorizes all history information, the dialogue state of the last turn is typically adopted by DST models as the input for predicting the current state. However, these models tend to keep the predicted slot values unchanged, which we define as state momentum in this paper. Specifically, the models struggle to update slot values that need to be changed and to correct slot values wrongly predicted in the last turn. To this end, we propose MoNET to tackle state momentum via noise-enhanced training. First, the previous state of each turn in the training data is noised by replacing some of its slot values. Then, the noised previous state is used as the input to learn to predict the current state, improving the model's ability to update and correct slot values. Furthermore, a contrastive context matching framework is designed to narrow the representation distance between a state and its corresponding noised variant, which reduces the impact of the noised state and helps the model better understand the dialogue history. Experimental results on MultiWOZ datasets show that MoNET outperforms previous DST methods. Ablations and analysis verify the effectiveness of MoNET in alleviating state momentum and improving anti-noise ability.
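As a rough illustration of the noising step described above (not the exact MoNET recipe), a previous state can be corrupted by replacing a random subset of its slot values with other values from the ontology; the slot names, noise rate, and ontology below are made up.

import random

def noise_previous_state(prev_state, candidate_values, noise_rate=0.3, seed=None):
    """prev_state: dict of slot -> value; candidate_values: dict of slot -> list of values."""
    rng = random.Random(seed)
    noised = dict(prev_state)
    for slot, value in prev_state.items():
        if rng.random() < noise_rate:
            alternatives = [v for v in candidate_values.get(slot, []) if v != value]
            if alternatives:
                noised[slot] = rng.choice(alternatives)   # inject a wrong slot value
    return noised

# Toy usage with made-up MultiWOZ-style slots:
prev = {"hotel-area": "north", "hotel-stars": "4"}
ontology = {"hotel-area": ["north", "south", "east"], "hotel-stars": ["3", "4", "5"]}
print(noise_previous_state(prev, ontology, seed=0))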
Matchmaking systems are vital for creating fair matches in online multiplayer games, which directly affects players' satisfaction and game experience. Most matchmaking systems rely heavily on precise estimates of players' game skills to construct equitable games. However, a novice's skill rating is usually inaccurate, since current matchmaking rating algorithms require a considerable number of games to learn a new player's true skill. Using these unreliable skill scores for matchmaking at the early stage often leads to disparities in team performance, which in turn causes negative game experiences. This is known as the "cold-start" problem of matchmaking rating algorithms. To overcome this problem, this paper proposes QuickSkill, a deep-learning-based novice skill estimation framework that quickly probes the abilities of new players in online multiplayer games. QuickSkill extracts sequential performance features from a player's initial few games to predict his or her future skill rating with a dedicated neural network, thereby delivering accurate skill estimation at the player's early game stage. By employing QuickSkill for matchmaking, game fairness can be greatly improved during the initial cold-start period. We conduct experiments on a popular mobile multiplayer game in both offline and online scenarios. Results obtained on two real-world anonymized gaming datasets show that the proposed QuickSkill delivers precise estimates of novices' game skills, leading to significantly lower team skill disparities and a better player game experience. To the best of our knowledge, QuickSkill is the first framework to tackle the cold-start problem of traditional skill rating algorithms.
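A minimal sketch of the kind of model the framework above describes, assuming a GRU encoder over per-game performance feature vectors and a linear head regressing the future skill rating; the feature dimension, hidden size, and architecture are assumptions, not the paper's exact network.

import torch
import torch.nn as nn

class NoviceSkillEstimator(nn.Module):
    def __init__(self, feature_dim=10, hidden_dim=32):
        super().__init__()
        self.encoder = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)    # regress the future skill rating

    def forward(self, game_features):           # (batch, num_early_games, feature_dim)
        _, last_hidden = self.encoder(game_features)
        return self.head(last_hidden[-1]).squeeze(-1)

# Toy usage: 4 players, 5 early games each, 10 per-game performance features.
model = NoviceSkillEstimator()
predicted_rating = model(torch.randn(4, 5, 10))  # shape (4,)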
We solve a fundamental challenge in semiconductor IC design: the fast and accurate characterization of nanoscale photonic devices. Much like the fusion between AI and EDA, many efforts have been made to apply DNNs such as convolutional neural networks (CNN) to prototype and characterize next-gen optoelectronic devices commonly found in photonic integrated circuits (PIC) and LiDAR. These prior works generally strive to predict the quality factor (Q) and modal volume (V) of, for instance, photonic crystals with ultra-high accuracy and speed. However, state-of-the-art models are still far from being directly applicable in the real world: e.g., the correlation coefficient of V ($V_{coeff}$) is only about 80%, which is much lower than what it takes to generate reliable and reproducible nanophotonic designs. Recently, attention-based transformer models have attracted extensive interest and have been widely used in CV and NLP. In this work, we propose the first-ever Transformer model (POViT) to efficiently design and simulate semiconductor photonic devices with multiple objectives. Unlike the standard Vision Transformer (ViT), we supplied photonic crystals as data input and changed the activation layer from GELU to an absolute-value function (ABS). Our experiments show that POViT significantly exceeds the results reported by previous models. The correlation coefficient $V_{coeff}$ increases by over 12% (i.e., to 92.0%) and the prediction errors of Q are reduced by an order of magnitude, among several other key metric improvements. Our work has the potential to drive the expansion of EDA to fully automated photonic design. The complete dataset and code will be released to aid researchers endeavoring in the interdisciplinary field of physics and computer science.
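The activation change highlighted above is easy to illustrate: in the ViT feed-forward block, GELU is replaced with an absolute-value function. The sketch below shows only that block; the embedding size, hidden width, and token count are placeholders, and the rest of POViT is not reproduced here.

import torch
import torch.nn as nn

class AbsActivation(nn.Module):
    def forward(self, x):
        return torch.abs(x)                    # ABS in place of GELU

class TransformerMLP(nn.Module):
    def __init__(self, dim=256, hidden_dim=1024):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden_dim)
        self.act = AbsActivation()             # a standard ViT would use nn.GELU() here
        self.fc2 = nn.Linear(hidden_dim, dim)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))

tokens = torch.randn(2, 196, 256)              # (batch, patches, embed_dim)
out = TransformerMLP()(tokens)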
There is a growing interest in developing unlearnable examples (UEs) against visual privacy leaks on the Internet. UEs are training samples added with invisible but unlearnable noise, which have been found to prevent unauthorized training of machine learning models. UEs are typically generated via a bilevel optimization framework with a surrogate model to remove (minimize) errors from the original samples, and then applied to protect the data against unknown target models. However, existing UE generation methods all rely on an ideal assumption called label-consistency, where the hackers and protectors are assumed to hold the same label for a given sample. In this work, we propose and promote a more practical label-agnostic setting, where the hackers may exploit the protected data quite differently from the protectors. E.g., an m-class unlearnable dataset held by the protector may be exploited by the hacker as an n-class dataset. Existing UE generation methods are rendered ineffective in this challenging setting. To tackle this challenge, we present a novel technique called Unlearnable Clusters (UCs) to generate label-agnostic unlearnable examples with cluster-wise perturbations. Furthermore, we propose to leverage Vision-and-Language Pre-trained Models (VLPMs) like CLIP as the surrogate model to improve the transferability of the crafted UCs to diverse domains. We empirically verify the effectiveness of our proposed approach under a variety of settings with different datasets, target models, and even the commercial platforms Microsoft Azure and Baidu PaddlePaddle.
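A highly simplified sketch of the cluster-wise idea, under loose assumptions: samples are clustered in a surrogate feature space (random features below stand in for a VLPM encoder such as CLIP), and every member of a cluster receives the same perturbation. In the actual method the per-cluster perturbations are optimized rather than random, and the budget and cluster count here are arbitrary.

import numpy as np
from sklearn.cluster import KMeans

def unlearnable_cluster_perturbations(images, features, num_clusters=3, epsilon=8 / 255):
    """images: (N, D) flattened samples in [0, 1]; features: (N, F) surrogate embeddings."""
    cluster_ids = KMeans(n_clusters=num_clusters, n_init=10, random_state=0).fit_predict(features)
    rng = np.random.default_rng(0)
    # One shared perturbation per cluster (random here; optimized in the real method).
    deltas = rng.uniform(-epsilon, epsilon, size=(num_clusters, images.shape[1]))
    protected = np.clip(images + deltas[cluster_ids], 0.0, 1.0)
    return protected, cluster_ids

# Toy usage: 20 "images" of 64 pixels with 16-dim surrogate features.
imgs = np.random.rand(20, 64)
feats = np.random.rand(20, 16)
protected, ids = unlearnable_cluster_perturbations(imgs, feats)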
Gradient-based explanation is the cornerstone of explainable deep networks, but it has been shown to be vulnerable to adversarial attacks. However, existing works measure explanation robustness based on the $\ell_p$-norm, which can be counter-intuitive to humans, who only pay attention to the top few salient features. We propose explanation ranking thickness as a more suitable explanation robustness metric. We then present a new practical adversarial attacking goal for manipulating explanation rankings. To mitigate the ranking-based attacks while maintaining computational feasibility, we derive surrogate bounds of the thickness, which would otherwise require expensive sampling and integration. We use a multi-objective approach to analyze the convergence of a gradient-based attack to confirm that explanation robustness can be measured by the thickness metric. We conduct experiments on various network architectures and diverse datasets to demonstrate the superiority of the proposed methods, showing that the widely accepted Hessian-based curvature-smoothing approaches are not as robust as our method.
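To give a feel for a ranking-based robustness measure (this is a simplified stand-in, not the paper's exact definition of thickness), the sketch below averages the top-k overlap between the gradient-based feature ranking at a clean input and the rankings at points interpolated toward a perturbed input; the model, k, and number of interpolation steps are assumptions.

import torch

def saliency_ranking(model, x):
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return torch.argsort(x.grad.abs(), descending=True)   # feature ranking by saliency

def topk_ranking_thickness(model, x_clean, x_adv, k=5, steps=10):
    ref_topk = set(saliency_ranking(model, x_clean)[:k].tolist())
    overlaps = []
    for t in torch.linspace(0.0, 1.0, steps):
        x_t = (1 - t) * x_clean + t * x_adv                # point along the path
        topk = set(saliency_ranking(model, x_t)[:k].tolist())
        overlaps.append(len(ref_topk & topk) / k)          # how stable the top-k ranking is
    return sum(overlaps) / len(overlaps)

# Toy usage with a linear "network" on a 20-dimensional input.
model = torch.nn.Linear(20, 1)
x = torch.randn(20)
thickness = topk_ranking_thickness(model, x, x + 0.1 * torch.randn(20))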
Multivariate time series forecasting with hierarchical structure is pervasive in real-world applications, demanding not only predicting each level of the hierarchy, but also reconciling all forecasts to ensure coherency, i.e., the forecasts should satisfy the hierarchical aggregation constraints. Moreover, the disparities of statistical characteristics between levels can be huge, worsened by non-Gaussian distributions and non-linear correlations. To this end, we propose a novel end-to-end hierarchical time series forecasting model, based on conditioned normalizing flow-based autoregressive transformer reconciliation, to represent complex data distributions while simultaneously reconciling the forecasts to ensure coherency. Unlike other state-of-the-art methods, we achieve forecasting and reconciliation simultaneously without requiring any explicit post-processing step. In addition, by harnessing the power of deep models, we do not rely on any assumption such as unbiased estimates or Gaussian distributions. Our evaluation experiments are conducted on four real-world hierarchical datasets from different industrial domains (three public ones and a dataset from the application servers of Alipay's data center), and the preliminary results demonstrate the efficacy of our proposed method.
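The coherency constraint mentioned above can be made concrete with a toy three-node hierarchy (total = A + B) and a summing matrix S: forecasts are coherent exactly when the full forecast vector equals S applied to its bottom level. The sketch only illustrates the constraint, not the flow-based model or its reconciliation mechanism.

import numpy as np

# Hierarchy: total = A + B. Rows of S: [total, A, B], built from the bottom series [A, B].
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

def is_coherent(forecasts, S, tol=1e-6):
    """forecasts: vector over all levels, ordered as the rows of S."""
    bottom = forecasts[-S.shape[1]:]          # bottom-level part of the forecast
    return np.allclose(forecasts, S @ bottom, atol=tol)

incoherent = np.array([10.0, 6.0, 5.0])       # 6 + 5 != 10: violates the aggregation constraint
coherent = S @ incoherent[1:]                 # simple bottom-up reconciliation for illustration
print(is_coherent(incoherent, S), is_coherent(coherent, S))   # False True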
Task-oriented dialog (TOD) aims to assist users in achieving specific goals through multi-turn conversation. Recently, good results have been obtained based on large pre-trained models. However, the scarcity of labeled data hinders the efficient development of TOD systems at scale. In this work, we constructed a weakly supervised dataset based on a teacher/student paradigm that leverages a large collection of unlabelled dialogues. Furthermore, we built a modular dialogue system and integrated coarse-to-fine-grained classification for user intent detection. Experiments show that our method can reach the dialog goal with a higher success rate and generate more coherent responses.
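A minimal sketch of coarse-to-fine intent detection as described above, with made-up domains and intents: a coarse classifier first picks a domain, and only that domain's fine-grained intent head is then consulted. The encoder dimension and label sets are illustrative assumptions, not the paper's system.

import torch
import torch.nn as nn

class CoarseToFineIntent(nn.Module):
    def __init__(self, encoder_dim=64,
                 domains=("hotel", "restaurant"),
                 intents_per_domain=(("book", "ask_price"), ("book", "ask_menu"))):
        super().__init__()
        self.domains = list(domains)
        self.intents = [list(i) for i in intents_per_domain]
        self.coarse = nn.Linear(encoder_dim, len(self.domains))        # coarse: which domain
        self.fine = nn.ModuleList(
            [nn.Linear(encoder_dim, len(i)) for i in self.intents]     # fine: intents per domain
        )

    def forward(self, utterance_vec):
        domain_id = self.coarse(utterance_vec).argmax(-1).item()       # coarse step
        intent_id = self.fine[domain_id](utterance_vec).argmax(-1).item()  # fine step within domain
        return self.domains[domain_id], self.intents[domain_id][intent_id]

# Toy usage with a random 64-dim utterance encoding (untrained, so the output is arbitrary).
model = CoarseToFineIntent()
print(model(torch.randn(64)))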